With the increased use of machine learning systems for decision making, questions about the fairness properties of such systems are taking center stage. Most existing work on algorithmic fairness assumes complete observation of features at prediction time, as is the case for popular notions like statistical parity and equal opportunity. However, this is not sufficient for models that can make predictions with partial observations, as we could miss patterns of bias and incorrectly certify a model as fair. To address this, a recently introduced notion of fairness asks whether the model exhibits any discrimination pattern, in which an individual characterized by (partial) feature observations receives vastly different decisions merely by disclosing one or more sensitive attributes such as gender or race. By explicitly accounting for partial observations, this provides a much more fine-grained notion of fairness. In this paper, we propose an algorithm to search for discrimination patterns in a general class of probabilistic models, namely probabilistic circuits. Previously, such algorithms were limited to naive Bayes classifiers, which make strong independence assumptions; by contrast, probabilistic circuits provide a unifying framework for a wide range of tractable probabilistic models and can even be compiled from certain classes of Bayesian networks and probabilistic programs, making our method much more broadly applicable. Furthermore, for an unfair model, it may be useful to quickly find discrimination patterns and distill them for better interpretability. As such, we also propose a sampling-based approach to more efficiently mine discrimination patterns, and introduce new classes of patterns such as minimal, maximal, and Pareto-optimal patterns that can effectively summarize exponentially many discrimination patterns.
Catastrophic overfitting is a phenomenon observed during adversarial training with the fast gradient sign method (FGSM), in which test robustness drops sharply within a single epoch of the training phase. Prior work has attributed this loss of robustness to a sharp decrease in the $\textit{local linearity}$ of the network with respect to the input space, and has shown that adding a local-linearity measure as a regularization term prevents catastrophic overfitting. Using a simple neural network architecture, we experimentally demonstrate that maintaining high local linearity may be $\textit{sufficient}$ to prevent catastrophic overfitting, but is not $\textit{necessary}$. Further, inspired by Parseval networks, we introduce a regularization term, used alongside FGSM, that makes the network's weight matrices orthogonal, and we study the connection between the orthogonality of the network weights and local linearity. Finally, we identify a $\textit{double descent}$ phenomenon during adversarial training.
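A minimal sketch of the two ingredients involved, assuming a toy logistic model in NumPy rather than the networks studied in the work: the one-step FGSM perturbation, and a Parseval-style penalty that pushes a weight matrix toward orthonormal rows.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, y, w, b, eps):
    """One-step FGSM on a logistic model p = sigmoid(w.x + b).

    The gradient of the cross-entropy loss w.r.t. the input is
    (p - y) * w; FGSM moves eps along its elementwise sign."""
    p = sigmoid(w @ x + b)
    grad_x = (p - y) * w
    return x + eps * np.sign(grad_x)

def orthogonality_penalty(W):
    """Parseval-style regularizer ||W W^T - I||_F^2, zero exactly
    when the rows of W are orthonormal."""
    G = W @ W.T
    return np.sum((G - np.eye(W.shape[0])) ** 2)
```

In adversarial training, `fgsm_perturb` would generate the training-time attack, and `orthogonality_penalty` would be added to the loss with some weight; both names and the logistic setting are illustrative assumptions.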
Smart eHealth applications deliver personalized and preventive digital healthcare services to clients through remote sensing, continuous monitoring, and data analytics. Smart eHealth applications sense input data from multiple modalities, transmit the data to edge and/or cloud nodes, and process the data with compute-intensive machine learning (ML) algorithms. Continuous noisy input data, unreliable network connectivity, the computational demands of ML algorithms, and the choice of computation placement across the sensor-edge-cloud layers all affect the efficiency of ML-driven eHealth applications. In this chapter, we present techniques for optimized computation placement, exploration of accuracy-performance trade-offs, and cross-layer co-optimization of sensing and computation for ML-driven eHealth applications. We demonstrate a practical use case of smart eHealth applications in everyday settings through an objective pain assessment case study on a sensor-edge-cloud framework.
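As an illustrative sketch only (the chapter's actual placement techniques are not specified here), computation placement under accuracy-performance trade-offs can be framed as choosing the lowest-energy tier that satisfies latency and accuracy constraints; every tier name and number below is made up:

```python
# Hypothetical per-tier profile: (latency, energy, accuracy) of running
# inference at each layer of the sensor-edge-cloud hierarchy.
TIERS = {
    "sensor": {"latency_ms": 5,   "energy_mj": 40, "accuracy": 0.81},
    "edge":   {"latency_ms": 30,  "energy_mj": 15, "accuracy": 0.90},
    "cloud":  {"latency_ms": 120, "energy_mj": 25, "accuracy": 0.95},
}

def place(max_latency_ms, min_accuracy):
    """Pick the lowest-energy tier meeting both constraints, or None."""
    feasible = [
        (tier, prof) for tier, prof in TIERS.items()
        if prof["latency_ms"] <= max_latency_ms
        and prof["accuracy"] >= min_accuracy
    ]
    if not feasible:
        return None
    return min(feasible, key=lambda tp: tp[1]["energy_mj"])[0]
```

Real placement must also account for network reliability and noisy input streams, as the chapter notes; this sketch captures only the static trade-off structure.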
Health monitoring applications increasingly rely on machine learning techniques to learn end-user physiological and behavioral patterns in everyday settings. Given the significant role of wearable devices in monitoring human body parameters, on-device learning can be leveraged to build personalized models of behavioral and physiological patterns while simultaneously providing data privacy to users. However, the resource constraints of most wearable devices prevent online learning from being performed on them. To address this issue, machine learning models need to be rethought from an algorithmic perspective to be suitable for running on wearable devices. Hyperdimensional computing (HDC) offers a well-suited on-device learning solution for resource-constrained devices and provides support for privacy-preserving personalization. Our HDC-based method offers flexibility, high efficiency, resilience, and performance while enabling on-device personalization and privacy protection. We evaluate the efficacy of our approach using three case studies and show that our system improves the energy efficiency of training by up to $45.8\times$, with comparable accuracy, relative to state-of-the-art deep neural network (DNN) algorithms.
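A minimal sketch of the HDC primitives that make such on-device learning cheap, assuming the common bipolar encode-bundle-compare scheme; the dimensionality and function names are illustrative, not the chapter's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
D = 10_000  # hypervector dimensionality

def random_hv():
    """Random bipolar hypervector in {-1, +1}^D; random pairs are
    nearly orthogonal at this dimensionality."""
    return rng.choice([-1, 1], size=D)

def encode(feature_hvs, value_hvs):
    """Encode a sample: bind (elementwise product) each feature-id
    vector with its value vector, then bundle (sum and binarize)."""
    bound = [f * v for f, v in zip(feature_hvs, value_hvs)]
    return np.sign(np.sum(bound, axis=0))

def train(samples_by_class):
    """Class prototype = bundle of that class's encoded samples.
    Training is just addition -- no gradients, hence the energy savings."""
    return {c: np.sign(np.sum(hvs, axis=0)) for c, hvs in samples_by_class.items()}

def classify(query, prototypes):
    """Nearest prototype by cosine similarity."""
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    return max(prototypes, key=lambda c: cos(query, prototypes[c]))
```

Personalization on-device then amounts to adding a user's encoded samples into the prototypes, which also keeps raw data local, supporting the privacy claim above.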
A concise and measurable set of FAIR (findable, accessible, interoperable, and reusable) principles for scientific data is transforming the state of practice for data management and stewardship, supporting and enabling discovery and innovation. Learning from this initiative, and acknowledging the impact of artificial intelligence (AI) in the practice of science and engineering, we introduce a set of practical, concise, and measurable FAIR principles for AI models. We show how to create and share FAIR data and AI models within a unified computational framework combining the following elements: the Advanced Photon Source at Argonne National Laboratory, the Materials Data Facility, the Data and Learning Hub for Science, funcX, and the Argonne Leadership Computing Facility (ALCF), in particular the ThetaGPU supercomputer and the SambaNova DataScale system at the ALCF AI Testbed. We describe how this domain-agnostic computational framework can be harnessed to enable autonomous AI-driven discovery.